
    Symmetric tensor decomposition

    We present an algorithm for decomposing a symmetric tensor of dimension n and order d as a sum of rank-1 symmetric tensors, extending the algorithm devised by Sylvester in 1886 for binary forms. We recall the correspondence between the decomposition of a homogeneous polynomial in n variables of total degree d as a sum of powers of linear forms (Waring's problem), incidence properties on secant varieties of the Veronese variety, and the representation of linear forms as linear combinations of evaluations at distinct points. We then reformulate Sylvester's approach from the dual point of view. Exploiting this duality, we propose necessary and sufficient conditions for the existence of such a decomposition of a given rank, using the properties of Hankel (and quasi-Hankel) matrices derived from multivariate polynomials and normal form computations. This leads to the resolution of polynomial equations of small degree in non-generic cases. We propose a new algorithm for symmetric tensor decomposition, based on this characterization and on linear algebra computations with these Hankel matrices. The impact of this contribution is twofold. First, it permits an efficient computation of the decomposition of any tensor of sub-generic rank, as opposed to widely used iterative algorithms with unproven global convergence (e.g. alternating least squares or gradient descent). Second, it gives tools for understanding uniqueness conditions and for detecting the rank.
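    The binary-form case that this abstract generalizes can be sketched with plain linear algebra. The sketch below is an illustration of Sylvester's 1886 idea only, not the paper's multivariate algorithm: build the Hankel (catalecticant) matrices of the coefficients, take a kernel vector, and read the decomposition off the roots of the kernel polynomial, assuming those roots are distinct and finite.

    ```python
    import numpy as np

    def sylvester_decompose(b, tol=1e-10):
        """Sylvester's algorithm for binary forms (a sketch, assuming a
        minimal decomposition with distinct, finite roots exists).

        Input: coefficients b[0..d] of f(x, y) = sum_i binom(d, i) b[i] x^(d-i) y^i.
        Output: (t, lam) with f = sum_j lam[j] * (x + t[j] y)^d.
        """
        b = np.asarray(b, dtype=float)
        d = len(b) - 1
        for r in range(1, d + 1):
            # (d-r+1) x (r+1) Hankel (catalecticant) matrix H[i, j] = b[i+j]
            H = np.array([[b[i + j] for j in range(r + 1)]
                          for i in range(d - r + 1)])
            _, s, Vt = np.linalg.svd(H, full_matrices=True)
            if int(np.sum(s > tol)) > r:
                continue                    # no kernel vector: rank(f) > r
            c = Vt[-1]                      # kernel vector -> q(t) = sum_k c[k] t^k
            if abs(c[-1]) < tol:
                continue                    # root at infinity: outside this sketch
            t = np.roots(c[::-1])           # np.roots expects highest degree first
            if len(set(np.round(t, 8))) < r:
                continue                    # repeated roots: no rank-r decomposition
            # recover the weights from b[i] = sum_j lam[j] * t[j]**i (Vandermonde)
            V = np.vander(t, N=d + 1, increasing=True).T
            lam, *_ = np.linalg.lstsq(V, b, rcond=None)
            return t, lam
        raise ValueError("no decomposition with distinct finite roots found")

    # f(x, y) = (x + y)^3 + (x - y)^3 = 2x^3 + 6xy^2, so b = [2, 0, 2, 0]
    t, lam = sylvester_decompose([2, 0, 2, 0])
    ```

    For this example the kernel polynomial is 1 - t^2, whose roots ±1 give the two linear forms x + y and x - y, each with weight 1.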

    Supervised classification by multilayer networks

    The Multi-Layer Perceptron (MLP; PMC in French) is one of the most widely used neural networks, particularly for supervised classification. First, existing results on the general representation capabilities enjoyed by the MLP architecture are surveyed, independently of any learning algorithm. It is then shown why the minimization of a quadratic error over the learning set seems an awkward optimization criterion, though some asymptotic properties are also proved. In a second stage, the Bayesian approach is analyzed when only learning sets of finite size are available. With the help of certain density estimators whose basic properties are emphasized, it is possible to build a feed-forward neural network implementing the Bayesian classification. This technique of direct discrimination seems to perform better than the classical MLP in all respects, despite the similarities of the architectures.
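    The "direct discrimination" idea (plug class-conditional density estimates into Bayes' rule instead of training a network on a quadratic error) can be sketched as follows. This toy version uses Gaussian Parzen-window estimators, which is an assumption: the paper's specific estimators and network construction are not reproduced here.

    ```python
    import numpy as np

    def parzen_bayes_classify(X_train, y_train, X_test, h=0.5):
        """Bayes-rule classifier built from Parzen-window (Gaussian kernel)
        estimates of each class-conditional density p(x | c)."""
        classes = np.unique(y_train)
        scores = []
        for c in classes:
            Xc = X_train[y_train == c]
            prior = len(Xc) / len(y_train)
            # kernel density estimate of p(x | c) at each test point
            d2 = ((X_test[:, None, :] - Xc[None, :, :]) ** 2).sum(-1)
            dens = np.exp(-d2 / (2 * h * h)).mean(axis=1)
            scores.append(prior * dens)        # posterior, up to a constant
        return classes[np.argmax(scores, axis=0)]

    # two overlapping Gaussian classes in the plane
    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(-1.0, 1.0, (300, 2)),
                   rng.normal(+1.0, 1.0, (300, 2))])
    y = np.repeat([0, 1], 300)
    Xt = np.vstack([rng.normal(-1.0, 1.0, (200, 2)),
                    rng.normal(+1.0, 1.0, (200, 2))])
    yt = np.repeat([0, 1], 200)
    acc = (parzen_bayes_classify(X, y, Xt) == yt).mean()
    ```

    With enough training data the plug-in classifier approaches the Bayes decision rule, which is the property the abstract exploits.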

    An asymptotic bound for secant varieties of Segre varieties

    This paper studies the defectivity of secant varieties of Segre varieties. We prove that there exists an asymptotic lower estimate for the largest non-defective secant variety (not filling the ambient space) of any given Segre variety. In particular, we prove that the ratio between the largest non-defective secant variety of a Segre variety and its expected rank is bounded below by a value depending only on the number of factors of the Segre variety. Moreover, in the final section, we present some results obtained by explicit computation, proving the non-defectivity of all the secant varieties of Segre varieties of the shape (P^n)^4, with 1 < n < 11, except at most \sigma_199((P^8)^4) and \sigma_357((P^10)^4). Comment: 14 pages
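    The "expected rank" that the bound is measured against comes from a routine dimension count: the s-th secant variety of a Segre X = P^{n_1} x ... x P^{n_k} in P^N, with N = prod(n_i + 1) - 1, has expected dimension min(s(1 + sum n_i) - 1, N). The sketch below just mechanizes that count (the dimension count is standard, not the paper's contribution), and reproduces the indices 199 and 357+1 appearing in the abstract:

    ```python
    from math import prod

    def expected_generic_rank(ns):
        """Smallest s for which the s-th secant variety of the Segre product
        of the P^{n_i} is *expected* to fill the ambient projective space
        P^N, N = prod(n_i + 1) - 1, by the naive dimension count."""
        N = prod(n + 1 for n in ns) - 1
        per_point = 1 + sum(ns)   # dimension contributed by one more point
        s = 1
        while s * per_point - 1 < N:
            s += 1
        return s

    r8 = expected_generic_rank([8, 8, 8, 8])      # (P^8)^4 case
    r10 = expected_generic_rank([10, 10, 10, 10])  # (P^10)^4 case
    ```

    For (P^8)^4 this gives 199, matching the \sigma_199 of the abstract, and for (P^10)^4 it gives 358, so \sigma_357 is the largest sub-generic secant variety there.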

    Complex multivariable estimation

    Statistical mean of the function S(X,θ), variance of the estimators of θ, lower bound
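    The abstract is truncated; it mentions estimator variance and a lower bound, presumably of Cramér-Rao type. As a generic illustration only (not taken from the paper), the sketch below checks by Monte Carlo that the sample mean of a Gaussian attains the Cramér-Rao bound σ²/n on the variance of unbiased estimators of the mean:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    sigma, n, trials = 2.0, 50, 20000
    # Cramer-Rao lower bound for estimating theta from n samples of N(theta, sigma^2)
    crb = sigma ** 2 / n
    # empirical variance of the sample-mean estimator over many repetitions
    estimates = rng.normal(0.0, sigma, size=(trials, n)).mean(axis=1)
    emp_var = estimates.var()
    ```

    The empirical variance matches σ²/n up to Monte Carlo noise, i.e. the sample mean is efficient in this model.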

    Circularity and discrete-time random signals

    Complex random variables encountered in signal processing often result from a Fourier transform of real signals. As a consequence, they are not arbitrary complex variables, but enjoy so-called circularity properties. After a summary of basic definitions and an introduction of complex random variables, several definitions of circularity are proposed. It is then emphasized that the Fourier transform of some stationary random signals leads to circular complex variables.
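    The closing claim is easy to check numerically: for a white (hence stationary) real signal, any DFT bin k outside {0, N/2} has vanishing pseudo-covariance E[z²] while E[|z|²] stays large, which is the second-order circularity property. A small sketch (white Gaussian noise is an assumed special case, not the paper's general setting):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    N, trials, k = 64, 20000, 5           # DFT bin k not in {0, N/2}
    x = rng.normal(size=(trials, N))      # real, white, stationary signal
    z = np.fft.fft(x, axis=1)[:, k]       # one Fourier coefficient per trial
    power = np.mean(np.abs(z) ** 2)       # ordinary second moment E[|z|^2]
    pseudo = np.mean(z ** 2)              # pseudo-covariance E[z^2]
    ratio = abs(pseudo) / power           # ~0 for a circular variable
    ```

    The ratio |E[z²]| / E[|z|²] is at the level of Monte Carlo noise, while a non-circular variable (e.g. the purely real bin k = 0) would give a ratio of 1.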

    Finding Exogenous Variables in Data with Many More Variables than Observations

    Many statistical methods have been proposed to estimate causal models in classical situations with fewer variables than observations (p < n, where p is the number of variables and n the number of observations). However, modern datasets, including gene expression data, require high-dimensional causal modeling in challenging situations with orders of magnitude more variables than observations (p >> n). In this paper, we propose a method to find exogenous variables in a linear non-Gaussian causal model, which requires much smaller sample sizes than conventional methods and works even when p >> n. The key idea is to identify which variables are exogenous based on non-Gaussianity instead of estimating the entire structure of the model. Exogenous variables work as triggers that activate a causal chain in the model, and their identification leads to more efficient experimental designs and better understanding of the causal mechanism. We present experiments with artificial data and real-world gene expression data to evaluate the method. Comment: A revised version of this was published in Proc. ICANN201
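    The key idea can be illustrated on a two-variable toy model: in a linear non-Gaussian model, the exogenous variable is the one whose regression residuals stay independent of it. The sketch below proxies independence by a correlation of cubed values, which is an illustrative assumption, not the authors' estimator, and it covers only the pairwise case rather than p >> n:

    ```python
    import numpy as np

    def dependence_score(regressor, target):
        """OLS-regress target on regressor, then measure the leftover
        dependence between regressor and residual via a higher-order
        (cubic) correlation; it is ~0 when they are truly independent,
        and non-zero in the wrong causal direction for non-Gaussian data."""
        slope, intercept = np.polyfit(regressor, target, 1)
        resid = target - (slope * regressor + intercept)
        return abs(np.corrcoef(regressor ** 3, resid)[0, 1])

    rng = np.random.default_rng(0)
    n = 20000
    x1 = rng.uniform(-1, 1, n)             # exogenous, non-Gaussian
    x2 = 0.8 * x1 + rng.uniform(-1, 1, n)  # caused by x1 plus non-Gaussian noise
    s_forward = dependence_score(x1, x2)   # correct direction: near zero
    s_backward = dependence_score(x2, x1)  # wrong direction: clearly non-zero
    ```

    Picking the variable with the smaller score correctly flags x1 as exogenous; with Gaussian data both scores would be near zero, which is why non-Gaussianity is essential.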

    Fourier PCA and Robust Tensor Decomposition

    Fourier PCA is Principal Component Analysis of a matrix obtained from higher-order derivatives of the logarithm of the Fourier transform of a distribution. We make this method algorithmic by developing a tensor decomposition method for a pair of tensors sharing the same vectors in rank-1 decompositions. Our main application is the first provably polynomial-time algorithm for underdetermined ICA, i.e., learning an n × m matrix A from observations y = Ax where x is drawn from an unknown product distribution with arbitrary non-Gaussian components. The number of component distributions m can be arbitrarily higher than the dimension n, and the columns of A only need to satisfy a natural and efficiently verifiable nondegeneracy condition. As a second application, we give an alternative algorithm for learning mixtures of spherical Gaussians with linearly independent means. These results also hold in the presence of Gaussian noise. Comment: Extensively revised; details added; minor errors corrected; exposition improved
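    The core object is easy to demo in the fully determined, whitened case: the Hessian of log E[e^{i<u,x>}] equals A diag(g_j''(a_j^T u)) A^T when x = As with independent components, so its eigenvectors recover the mixing directions. The sketch below works under strong simplifying assumptions (square orthogonal A, a single generic u), and is not the paper's underdetermined, tensor-based algorithm:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n_samples = 200000
    # square, orthogonal mixing of two independent uniform (non-Gaussian) sources
    theta = np.pi / 6
    A = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    S = rng.uniform(-np.sqrt(3), np.sqrt(3), size=(2, n_samples))  # unit variance
    X = A @ S

    # empirical Hessian of log phi(u), phi(u) = E[exp(i <u, x>)]
    u = np.array([1.0, 0.3])
    w = np.exp(1j * (u @ X))                      # exp(i<u,x>) per sample
    phi = w.mean()                                # characteristic function at u
    mu = (X * w).mean(axis=1) / phi               # reweighted mean E[x w]/phi
    M = (X * w) @ X.T / (n_samples * phi)         # E[x x^T w]/phi
    H = -np.real(M - np.outer(mu, mu))            # Hessian of log phi (real part)
    H = (H + H.T) / 2

    eigvals, V = np.linalg.eigh(H)
    # each eigenvector should match a column of A up to sign
    match = np.abs(V.T @ A)
    ```

    Because the sources are non-Gaussian, the two diagonal entries g_j'' differ for a generic u, so the eigenvectors of H are identifiable and align with the columns of A.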

    A polynomial based approach to extract the maxima of an antipodally symmetric spherical function and its application to extract fiber directions from the Orientation Distribution Function in Diffusion MRI

    In this paper we extract the geometric characteristics from an antipodally symmetric spherical function (ASSF), which can be described equivalently in the spherical harmonic (SH) basis, in the symmetric tensor (ST) basis constrained to the sphere, and in the homogeneous polynomial (HP) basis constrained to the sphere. All three bases span the same vector space and are bijective when the rank of the SH series equals the order of the ST and equals the degree of the HP. We show, therefore, how it is possible to extract the maxima and minima of an ASSF by computing the stationary points of a constrained HP. In Diffusion MRI, the Orientation Distribution Function (ODF) represents a state-of-the-art reconstruction method whose maxima are aligned with the dominant fiber bundles. It is, therefore, important to be able to correctly estimate these maxima to detect the fiber directions. The ODF is an ASSF. To illustrate the potential of our method, we take up the example of the ODF and extract its maxima to detect the fiber directions. Thanks to our method we are able to extract the maxima without limiting our search to a discrete set of values on the sphere, but by searching for the maxima of a continuous function. Our method is also general, not dependent on the ODF, and the framework we present can be applied to any ASSF described in one of the three bases.
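    The authors compute the stationary points of the constrained homogeneous polynomial exactly. As a numerical stand-in only, the sketch below locates the maxima of a toy degree-4 antipodally symmetric HP on the sphere by projected gradient ascent with random restarts; the polynomial and the ascent method are illustrative assumptions, not the paper's polynomial-system approach:

    ```python
    import numpy as np

    def sphere_maxima(grad, n_starts=50, steps=300, eta=0.05, seed=0):
        """Projected gradient ascent on the unit sphere from random starts;
        returns the distinct local maxima found (antipodes identified)."""
        rng = np.random.default_rng(seed)
        found = []
        for _ in range(n_starts):
            v = rng.normal(size=3)
            v /= np.linalg.norm(v)
            for _ in range(steps):
                v = v + eta * grad(v)      # ascend, then project back
                v /= np.linalg.norm(v)
            if v[np.argmax(np.abs(v))] < 0:
                v = -v                     # pick one antipodal representative
            if not any(np.allclose(v, w, atol=1e-3) for w in found):
                found.append(v)
        return found

    # toy antipodally symmetric HP of degree 4: p(v) = 2x^4 + y^4 + z^4
    p = lambda v: 2 * v[0] ** 4 + v[1] ** 4 + v[2] ** 4
    grad = lambda v: np.array([8 * v[0] ** 3, 4 * v[1] ** 3, 4 * v[2] ** 3])
    maxima = sphere_maxima(grad)
    best = max(maxima, key=p)   # global maximum sits on the +/- x axis
    ```

    The continuous search recovers the maxima without discretizing the sphere; the paper's exact method replaces the ascent by solving the stationary-point equations of the constrained polynomial.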

    Non-canonical segmentation for supervised classification by trees
